
Tudor/aggressive header compression #1502

Merged: 17 commits merged into main on Sep 12, 2023

Conversation

@tudor-malene (Collaborator) commented on Sep 5, 2023:

Why this change is needed

Add compression of the batches.

What changes were made as part of this PR

  • No longer compress the actual batch headers.
  • Instead, create a data structure that allows the recreation of the headers (see README).
  • Remove some POBI checks from the validation.

PR checks pre-merging

Please indicate below, by ticking the checkbox, that you have read and performed the required PR checks.

  • PR checks reviewed and performed

BatchTimeDeltas []*big.Int     // todo - minimize by assuming a default of 1 sec and then storing only the exceptions
ReOrgs          []*BatchHeader // sparse list of reorged headers - non-nil only for reorgs
L1HeightDeltas  []*big.Int     // contains the L1 height at the indexes where it changes. Todo - can be optimised with deltas
@StefanIliev545 (Contributor) commented on Sep 6, 2023:

This field name and the description are confusing.
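For context, a sketch of the delta idea these fields rely on, and that the todo wants to push further. The encode/decode helpers below are hypothetical, not from this PR: store the first absolute timestamp, then only the per-batch differences; with a default batch period of 1 second most deltas are identical and compress very well.

import "math/big"

// Hypothetical illustration of the delta encoding behind BatchTimeDeltas.
// Assumes a non-empty, chronologically ordered list of timestamps.
func encodeTimeDeltas(times []uint64) (start uint64, deltas []*big.Int) {
	start = times[0]
	prev := times[0]
	for _, t := range times[1:] {
		deltas = append(deltas, big.NewInt(int64(t-prev)))
		prev = t
	}
	return start, deltas
}

// Recreates the absolute timestamps from the start value and the deltas.
func decodeTimeDeltas(start uint64, deltas []*big.Int) []uint64 {
	times := []uint64{start}
	current := start
	for _, d := range deltas {
		current += d.Uint64()
		times = append(times, current)
	}
	return times
}

The further optimisation the todo mentions would store only the indexes where the delta differs from the 1-second default, rather than one entry per batch.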

func (rc *RollupCompression) CreateExtRollup(r *core.Rollup) (*common.ExtRollup, error) {
	header, err := rc.createRollupHeader(r.Batches)
	if err != nil {
		return nil, err
Contributor:

Would be good to wrap those errors in more descriptive ones; when running through errors in the logs it can be hard to find out during which bit of processing they happened.

tudor-malene (Collaborator, Author):

I know!
I was thinking we should use a library everywhere that does the wrapping and recreates some sort of sane stack traces. I hate manually wrapping everything and writing some random repetitive text just to know where something happened.

Contributor:

https://pkg.go.dev/github.com/pkg/errors - I think this one automatically creates stack traces

tudor-malene (Collaborator, Author):

Let's do this as part of a separate piece of work, and add it everywhere. We can use this library or another one.
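For reference, a minimal, self-contained sketch of the pkg/errors approach mentioned above; the function name and messages are illustrative.

package main

import (
	"fmt"

	"github.com/pkg/errors"
)

// errors.New captures a stack trace at the point the error is created.
func loadHeader() error {
	return errors.New("header not found")
}

func main() {
	if err := loadHeader(); err != nil {
		// Wrap annotates the error with context while keeping the original
		// stack trace, so logs show the message chain and where it happened.
		err = errors.Wrap(err, "could not create rollup header")
		fmt.Printf("%+v\n", err) // %+v prints the messages plus the stack trace
	}
}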

@@ -208,7 +208,7 @@ func ToExtRollupMsg(rollup *common.ExtRollup) generated.ExtRollupMsg {
 		return generated.ExtRollupMsg{}
 	}

-	return generated.ExtRollupMsg{Header: ToRollupHeaderMsg(rollup.Header), BatchPayloads: rollup.BatchPayloads, BatchHeaders: rollup.BatchHeaders}
+	return generated.ExtRollupMsg{Header: ToRollupHeaderMsg(rollup.Header), BatchPayloads: rollup.BatchPayloads, BatchHeaders: rollup.CalldataRollupHeader}
Contributor:

Should the BatchHeaders name be changed?

tudor-malene (Collaborator, Author):

I think it's OK. It is the compressed batch headers.

Contributor:

Yeah, I think the name is fine! But it seems like common.ExtRollup.BatchHeaders was changed to common.ExtRollup.CalldataRollupHeader, so maybe the generated.ExtRollupMsg.BatchHeaders to generated.ExtRollupMsg.CalldataRollupHeader change was missed?

go/enclave/components/batch_registry.go (resolved)
4. The Signatures over the batches are not stored, since the rollup is itself signed.
5. The cross chain messages are calculated.
*/
type RollupCompression struct {
Contributor:

Would be cool to have unit tests for this struct with a few different data scenarios.
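A round-trip test could be shaped roughly like this. Note that makeTestBatches and decompressToBatchHeaders are hypothetical placeholders for the real fixtures and decompression entry point, and the final comparison assumes a batch's hash is its header's hash. Scenarios worth covering: all-canonical batches, batches with reorgs, irregular time deltas, and L1 height changes.

import "testing"

// Hypothetical round-trip test sketch for RollupCompression.
func TestRollupCompressionRoundTrip(t *testing.T) {
	batches := makeTestBatches(t, 10) // hypothetical fixture helper
	rollup := &core.Rollup{Batches: batches}

	rc := &RollupCompression{} // real construction will need dependencies
	extRollup, err := rc.CreateExtRollup(rollup)
	if err != nil {
		t.Fatalf("compression failed: %v", err)
	}

	headers, err := rc.decompressToBatchHeaders(extRollup) // hypothetical entry point
	if err != nil {
		t.Fatalf("decompression failed: %v", err)
	}
	for i, h := range headers {
		if h.Hash() != batches[i].Hash() { // assumes the batch hash is the header hash
			t.Errorf("batch %d: header mismatch after round trip", i)
		}
	}
}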

go/enclave/components/rollup_compression.go (resolved)
Comment on lines 161 to 170
if can.Hash() != batch.Hash() {
	// if the canonical batch of the same height is different from the current batch
	// then add the entire header to a "reorgs" array
	reorgs[i] = batch.Header
	isReorg = true
} else {
	reorgs[i] = nil
}
batchHashes[i] = batch.Hash()
batchHeaders[i] = batch.Header
Contributor:

What if append() was used here for the reorgs, and reorgs was a var reorgs []*common.BatchHeader instead of an initialized array? Would that simplify the reorgs array? Then checking len(reorgs) would be the same as isReorg?

tudor-malene (Collaborator, Author):

I'm still trying to find the best way to express this. It's very fiddly, and there is still an off-by-one error somewhere when there are reorgs.
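For what it's worth, a rough sketch of the append-based variant suggested above. Note that the original slice is positional (reorgs[i] lines up with batches[i]), so nils still have to be appended to preserve alignment, which is probably part of the fiddliness; only the isReorg flag becomes derivable. The canonicalBatchAt lookup is hypothetical.

var reorgs []*common.BatchHeader
for _, batch := range batches {
	can := canonicalBatchAt(batch.Number()) // hypothetical canonical-chain lookup
	if can.Hash() != batch.Hash() {
		reorgs = append(reorgs, batch.Header)
	} else {
		// nil is still appended so reorgs[i] stays aligned with batches[i]
		reorgs = append(reorgs, nil)
	}
}

// isReorg then becomes a scan instead of a flag set inside the loop.
isReorg := false
for _, r := range reorgs {
	if r != nil {
		isReorg = true
		break
	}
}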

@tudor-malene marked this pull request as ready for review on Sep 12, 2023, 08:13
@BedrockSquirrel (Collaborator) left a comment:

LGTM, but a lot to take in! A couple of minor comments.

reorgs := make([]*common.BatchHeader, len(batches))

deltaTimes := make([]*big.Int, len(batches))
startTime := batches[0].Header.Time
Collaborator:

Might be worth checking that the batches list isn't empty here, to avoid panic risk and give back a useful error if things get weird.

tudor-malene (Collaborator, Author):

There is such a check when the Rollup is created.
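For reference, the guard being asked for would be a couple of lines at the top of createRollupHeader, along these lines (a sketch; the error message is illustrative):

// Hypothetical defensive guard, if one were wanted at this level too.
if len(batches) == 0 {
	return nil, fmt.Errorf("cannot create a rollup header from an empty batch list")
}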

// get the first canonical batch
firstCanonBatchHeight := batches[0].Number()
firstCanonParentHash := batches[0].Header.ParentHash
for i, reorg := range reorgs {
Collaborator:

Took me a while to figure out this loop; maybe worth a little comment to say it's looping to find the first batch with no reorg, or something.

Also, a weird edge case to think about: if we somehow had a rollup that was only reorgs, would it brick the network because the first canon data wouldn't align with other rollups?

// get the first canonical batch
firstCanonBatchHeight := batches[0].Number()
firstCanonParentHash := batches[0].Header.ParentHash
for i, reorg := range reorgs {
Contributor:

Can more details be added in the comment? This logic is not straightforward.

@otherview (Contributor) commented on Sep 12, 2023:

Oops, GoLand just reloaded and I now see that Matt made the same comment.
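To spell the loop out, here is a guess at its intent with the requested comments added. The nil check is inferred from the reorgs slice construction above, where canonical batches get a nil entry.

// The rollup may start with reorged (non-canonical) batches. Walk forward to
// the first batch that was NOT reorged; that batch anchors the canonical
// chain this rollup attaches to.
firstCanonBatchHeight := batches[0].Number()
firstCanonParentHash := batches[0].Header.ParentHash
for i, reorg := range reorgs {
	if reorg == nil {
		// batches[i] is the first canonical batch in this rollup
		firstCanonBatchHeight = batches[i].Number()
		firstCanonParentHash = batches[i].Header.ParentHash
		break
	}
}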

AvgBlockDuration: 1 * time.Second,
SimulationTime: 75 * time.Second,
L1EfficiencyThreshold: 0.2,
// Very hard to have precision here as blocks are continually produced and not dependent on the simulation execution thread
Contributor:

Why remove these two Efficiencies?

tudor-malene (Collaborator, Author):

They have been obsolete for ages.

@tudor-malene merged commit 764f98d into main on Sep 12, 2023. 2 checks passed.
@tudor-malene deleted the tudor/aggressive_header_compression branch on Sep 12, 2023, 11:26.